Dual Coordinate Ascent
Review of "Accelerated Mini-Batch Stochastic Dual Coordinate Ascent": This paper studies a mini-batch method for stochastic dual coordinate ascent. The idea is simple: at each iteration, randomly pick m samples and update their dual coordinates jointly. The authors prove that the convergence rate of the mini-batch method interpolates between those of SDCA and accelerated gradient descent (AGD), and in certain regimes it can be faster than both. I am a little surprised that, in the case γλn = O(1), the number of examples processed by ASDCA is n√m, which means that under full parallelization, m machines give an acceleration rate of √m.
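To ground the reviewed idea, here is a minimal NumPy sketch of one mini-batch dual coordinate ascent scheme, specialized to ridge regression. The function name, the square-loss specialization, and the naive 1/m damping are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def minibatch_sdca(X, y, lam, m, iters, seed=0):
    """Sketch of mini-batch dual coordinate ascent for ridge regression:
        min_w (1/n) * sum_i 0.5 * (x_i @ w - y_i)**2 + (lam / 2) * ||w||^2,
    maintaining dual variables alpha with w = X.T @ alpha / (lam * n)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    sq = np.einsum("ij,ij->i", X, X)                  # precomputed ||x_i||^2
    for _ in range(iters):
        batch = rng.choice(n, size=m, replace=False)  # randomly pick m samples
        # Closed-form coordinate steps for the square loss, all computed
        # against the same w; this is what makes the batch parallelizable.
        delta = (y[batch] - X[batch] @ w - alpha[batch]) / (1.0 + sq[batch] / (lam * n))
        delta /= m  # naive damping keeps the joint update safe (an assumption;
                    # ASDCA instead uses an acceleration sequence)
        alpha[batch] += delta
        w += X[batch].T @ delta / (lam * n)           # keep the primal-dual link
    return w, alpha
```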
Accelerated Mini-Batch Stochastic Dual Coordinate Ascent
Shai Shalev-Shwartz, Tong Zhang
Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA under the mini-batch setting that is often used in practice. Our main contribution is to introduce an accelerated mini-batch version of SDCA and prove a fast convergence rate for this method. We discuss an implementation of our method over a parallel computing system, and compare the results to both the vanilla stochastic dual coordinate ascent and to the accelerated deterministic gradient descent method of Nesterov [2007].
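For orientation, the sketch below shows the generic shape of a Nesterov-style momentum wrapper around a stochastic update. Here `step` is a hypothetical callable standing in for one mini-batch dual iteration, and the fixed theta is a placeholder; the paper derives its actual momentum parameters from m, n, the regularization, and the smoothness of the loss.

```python
def accelerated_loop(step, w0, iters, theta=0.9):
    """Generic Nesterov-style momentum wrapper (a sketch, not ASDCA's exact
    parameter sequence). `step(v)` performs one mini-batch dual update from
    the look-ahead point v and returns the new iterate."""
    w_prev, w = w0.copy(), w0.copy()
    for _ in range(iters):
        v = w + theta * (w - w_prev)  # extrapolate past the current iterate
        w_prev, w = w, step(v)        # take the stochastic step from v
    return w
```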
Communication-Efficient Distributed Dual Coordinate Ascent
Martin Jaggi, Virginia Smith, Martin Takac, Jonathan Terhorst, Sanjay Krishnan, Thomas Hofmann, Michael I. Jordan
Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In this paper, we propose a communication-efficient framework, COCOA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. We provide a strong convergence rate analysis for this class of algorithms, as well as experiments on real-world distributed datasets with implementations in Spark. In our experiments, we find that as compared to state-of-the-art mini-batch versions of SGD and SDCA algorithms, COCOA converges to the same .001-accurate solution quality on average 25× as quickly.
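A rough sketch of the outer loop the abstract describes, under simplifying assumptions (square loss, a fixed data partition, an averaging combiner). The names `local_sdca` and `cocoa_round` and the data layout are illustrative, not the authors' Spark implementation.

```python
import numpy as np

def local_sdca(Xk, yk, alpha_k, w, lam, n, steps, rng):
    """A few SDCA steps on one worker's partition, starting from the shared
    iterate w; only the accumulated primal change dw leaves the machine."""
    dw = np.zeros_like(w)
    for _ in range(steps):
        i = rng.integers(len(yk))     # local coordinate, sampled uniformly
        d = (yk[i] - Xk[i] @ (w + dw) - alpha_k[i]) / (1.0 + Xk[i] @ Xk[i] / (lam * n))
        alpha_k[i] += d
        dw += (d / (lam * n)) * Xk[i]
    return dw

def cocoa_round(parts, w, lam, n, steps, rng):
    """One outer round: every worker approximately solves its local dual
    subproblem, then a single averaged update is communicated."""
    updates = [local_sdca(Xk, yk, ak, w, lam, n, steps, rng)
               for (Xk, yk, ak) in parts]  # in Spark, a map over partitions
    return w + sum(updates) / len(parts)   # one vector exchange per round
```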
Straggler-Agnostic and Communication-Efficient Distributed Primal-Dual Algorithm for High-Dimensional Data Mining
Recently, reducing communication time between machines has become the main focus of distributed data mining. Previous methods have workers do more computation locally before aggregating the local solutions at the server, so that fewer communication rounds between server and workers are required. However, these methods do not reduce the communication time per round, and they perform poorly under certain conditions, for example when stragglers are present or the dataset is high-dimensional. In this paper, we aim to reduce the communication time per round as well as the number of required communication rounds. We propose a communication-efficient distributed primal-dual method with a straggler-agnostic server and bandwidth-efficient workers. We analyze its convergence properties and prove that the proposed method guarantees a linear convergence rate to the optimal solution for convex problems. Finally, we conduct large-scale experiments in simulated and real distributed systems, and the results demonstrate that the proposed method is much faster than the compared methods. Distributed optimization is nontrivial when the data or model is spread across multiple machines; when data are distributed, parameter-server [6], [14] and decentralized methods [15], [16] have been proposed for parallel computation with linear speedup.
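A toy sketch of the two ingredients the abstract combines: the server aggregates as soon as a quorum of workers has responded (straggler-agnostic), and each worker sends only a top-k-sparsified update (bandwidth-efficient). The message format, the `wait_for` quorum, and the top-k compression are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def topk_sparsify(dw, k):
    """Keep only the k largest-magnitude coordinates of a local update
    (one illustrative way to be bandwidth-efficient)."""
    idx = np.argpartition(np.abs(dw), -k)[-k:]
    return idx, dw[idx]

def straggler_agnostic_round(arrivals, w, wait_for):
    """Aggregate as soon as `wait_for` workers have responded instead of
    blocking on the slowest; `arrivals` yields (idx, vals) messages in
    arrival order (a hypothetical interface)."""
    dw = np.zeros_like(w)
    for _, (idx, vals) in zip(range(wait_for), arrivals):
        np.add.at(dw, idx, vals)   # scatter-add the sparse message
    return w + dw / wait_for
```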